S1/2 regularization methods and fixed point algorithms for affine rank minimization problems

Authors

  • Dingtao Peng
  • Naihua Xiu
  • Jian Yu
Abstract

The affine rank minimization problem is to minimize the rank of a matrix subject to linear constraints. It has many applications in areas such as statistics, control, system identification and machine learning. Unlike the literature that uses the nuclear norm or the general Schatten q (0 < q < 1) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten 1/2 quasi-norm, which approximates the rank better than the nuclear norm but leads to a nonconvex, nonsmooth and non-Lipschitz optimization problem. Notably, by exploiting the special structure of the objective function, we derive a global necessary optimality condition for the S1/2 regularization problem. This is very different from the local optimality conditions usually available for general Sq regularization problems. Explicitly, the global optimality condition for the S1/2 regularization problem is a fixed point equation associated with the singular value half-thresholding operator. Accordingly, we propose a fixed point iterative scheme for the problem and provide a convergence analysis for this iteration. By discussing the location and setting of the optimal regularization parameter, and by using an approximate singular value decomposition procedure, we obtain a very efficient algorithm for the S1/2 regularization problem: the half-norm fixed point algorithm with an approximate SVD (HFPA algorithm). Numerical experiments on randomly generated and real matrix completion problems demonstrate the effectiveness of the proposed algorithm.
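The fixed point scheme described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' reference implementation: the scalar half-thresholding formula follows the closed-form half-thresholding operator of Xu et al., and the sampling mask, step size `mu`, and regularization parameter `lam` are assumed inputs for a matrix completion instance.

```python
import numpy as np

def half_threshold(s, lam):
    """Scalar half-thresholding operator, applied elementwise.

    Entries below the threshold t = (54**(1/3)/4) * lam**(2/3) are set to zero;
    larger entries are shrunk via the closed-form cosine expression.
    """
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(s)
    big = s > t
    phi = np.arccos((lam / 8.0) * (s[big] / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * s[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def svht(X, lam):
    # Singular value half-thresholding: apply half_threshold to the spectrum.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * half_threshold(s, lam)) @ Vt

def half_norm_fixed_point(M_obs, mask, lam=0.1, mu=0.5, iters=100):
    # Fixed point iteration X <- H_{lam*mu}( X - mu * P_Omega(X) + mu * P_Omega(M) )
    # for matrix completion; P_Omega is the entrywise sampling projection (mask).
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * X - M_obs          # gradient of the data-fit term
        X = svht(X - mu * grad, lam * mu)
    return X
```

Note that `svht` computes a full SVD for clarity; the HFPA algorithm in the paper replaces this with an approximate SVD to scale to large matrices.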


Related articles

Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization

The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matr...
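Fixed point and shrinkage methods for nuclear norm minimization revolve around the singular value soft-thresholding operator. A minimal sketch, assuming nothing beyond standard numpy (the name `svt` and parameter `tau` are illustrative, not from the cited paper):

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: the proximal operator of tau * (nuclear norm).

    Each singular value is shrunk toward zero by tau; values below tau vanish,
    which is what drives the iterates toward low rank.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

In a fixed point scheme, a gradient step on the data-fit term alternates with this shrinkage step, avoiding the semidefinite programming formulation entirely.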


Convergence of fixed point continuation algorithms for matrix rank minimization

The matrix rank minimization problem has applications in many fields such as system identification, optimal control, low-dimensional embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization pr...


Piecewise Differentiable Minimization for Ill-posed Inverse Problems

Based on minimizing a piecewise differentiable lp function subject to a single inequality constraint, this paper discusses algorithms for a discretized regularization problem for ill-posed inverse problems. We examine computational challenges of solving this regularization problem. Possible minimization algorithms such as the steepest descent method, iteratively weighted least squares (IRLS) me...
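One of the candidate methods mentioned, iteratively reweighted least squares (IRLS), can be sketched for the equality-constrained lp problem. This is a generic IRLS sketch under assumed inputs (`A`, `b`, exponent `p`, smoothing parameter `eps`), not the cited paper's algorithm:

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=50, eps=1e-8):
    """IRLS sketch for min ||x||_p^p subject to Ax = b, with 0 < p < 1.

    Each iteration solves a weighted minimum-norm problem whose closed form is
    x = W A^T (A W A^T)^{-1} b, with diagonal weights W_ii = (x_i^2 + eps)^(1 - p/2).
    The smoothing eps keeps the weights bounded away from zero.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # least-squares initialization
    for _ in range(iters):
        w = (x ** 2 + eps) ** (1.0 - p / 2.0)   # diagonal of W
        AW = A * w                               # A @ diag(w), via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
    return x
```

Each update stays feasible by construction (it solves the constraint exactly), while the reweighting progressively suppresses small entries of x.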


Optimization Approaches for Learning with Low-rank Regularization

Low-rank modeling has a lot of important applications in machine learning, computer vision and social network analysis. As direct rank minimization is NP hard, many alternative choices have been proposed. In this survey, we first introduce optimization approaches for two popular methods on rank minimization, i.e., nuclear norm regularization and rank constraint. Nuclear norm is the tightest con...


Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization

The Schatten-p quasi-norm (0 < p < 1) is usually used to replace the standard nuclear norm in order to approximate the rank function more accurately. However, existing Schatten-p quasi-norm minimization algorithms involve singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each iteration, and thus may become very slow and impractical for large-scale problems. In this paper, we fi...



Journal:
  • Comp. Opt. and Appl.

Volume 67, Issue 

Pages  -

Publication year: 2017